Action Item


Use Zoom's AI Companion to Take Notes and Summarize Meetings

WIRED

People typing away like caffeinated chipmunks often don't realize how loudly they are clattering on a keyboard, yet someone has to jot down action items and reminders. That "someone" doesn't have to be a human, though. Released last fall, Zoom's AI Companion feature, included with all paid Zoom subscriptions, is like having an admin assistant on every call. The bot can summarize a meeting, create action items, and even tell you who talked the most.


Slack AI will generate transcripts and notes from huddles

Engadget

Salesforce has rolled out new AI features for its business-focused Slack chat app, designed to take over mundane chores like transcription. A key addition is Slack AI huddle notes, which "capture key takeaways and action items so users can focus on the work at hand," the company wrote. This looks like a more powerful version of an earlier Slack AI feature that recaps channel highlights and generates summaries for threads in a single click. When invited to a huddle, Slack AI creates a transcript based on real-time audio and the messages shared in the thread. It can also organize the notes, along with citations, action items, and shared files, into a canvas.


Thread: A Logic-Based Data Organization Paradigm for How-To Question Answering with Retrieval Augmented Generation

An, Kaikai, Yang, Fangkai, Li, Liqun, Lu, Junting, Cheng, Sitao, Wang, Lu, Zhao, Pu, Cao, Lele, Lin, Qingwei, Rajmohan, Saravan, Zhang, Dongmei, Zhang, Qi

arXiv.org Artificial Intelligence

Current question answering systems leveraging retrieval augmented generation perform well in answering factoid questions but face challenges with non-factoid questions, particularly how-to queries requiring detailed step-by-step instructions and explanations. In this paper, we introduce Thread, a novel data organization paradigm that transforms documents into logic units based on their inter-connectivity. Extensive experiments across open-domain and industrial scenarios demonstrate that Thread outperforms existing data organization paradigms in RAG-based QA systems, significantly improving the handling of how-to questions.
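The abstract does not include the authors' implementation; as a rough illustration of the underlying idea (reorganizing a how-to document into linked "logic units" so retrieval can return a coherent chain of steps rather than isolated passages), one might sketch something like the following. All names, the unit structure, and the example document are assumptions for illustration, not the paper's code:

```python
from dataclasses import dataclass, field

@dataclass
class LogicUnit:
    """One executable step of a how-to document (hypothetical structure)."""
    uid: str
    condition: str          # when this step applies
    action: str             # what to do
    next_uids: list = field(default_factory=list)  # links to follow-up units

def build_units(steps):
    """Link each step to its successor so retrieval can return a coherent chain."""
    units = [LogicUnit(uid=f"u{i}", condition=c, action=a)
             for i, (c, a) in enumerate(steps)]
    for prev, nxt in zip(units, units[1:]):
        prev.next_uids.append(nxt.uid)
    return {u.uid: u for u in units}

def retrieve_chain(units, start_uid):
    """Follow links from a matched unit to assemble step-by-step instructions."""
    chain, uid = [], start_uid
    while uid is not None:
        unit = units[uid]
        chain.append(unit.action)
        uid = unit.next_uids[0] if unit.next_uids else None
    return chain

# A toy troubleshooting document split into condition/action steps.
units = build_units([
    ("service fails to start", "check the error log"),
    ("log shows port conflict", "free the port or change the config"),
    ("config changed", "restart the service"),
])
print(retrieve_chain(units, "u0"))
```

The point of the linked structure is that a retriever matching only the first unit can still surface the full instruction sequence, which is where flat chunk-based RAG tends to fall short on how-to questions.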


Action-Item-Driven Summarization of Long Meeting Transcripts

Golia, Logan, Kalita, Jugal

arXiv.org Artificial Intelligence

The increased prevalence of online meetings has significantly enhanced the practicality of a model that can automatically generate the summary of a given meeting. This paper introduces a novel and effective approach to automate the generation of meeting summaries. Current approaches to this problem generate general and basic summaries, considering the meeting simply as a long dialogue. However, our novel algorithms can generate abstractive meeting summaries that are driven by the action items contained in the meeting transcript. This is done by recursively generating summaries and employing our action-item extraction algorithm for each section of the meeting in parallel. All of these sectional summaries are then combined and summarized together to create a coherent and action-item-driven summary. In addition, this paper introduces three novel methods for dividing up long transcripts into topic-based sections to improve the time efficiency of our algorithm, as well as to resolve the issue of large language models (LLMs) forgetting long-term dependencies. Our pipeline achieved a BERTScore of 64.98 across the AMI corpus, which is an approximately 4.98% increase from the current state-of-the-art result produced by a fine-tuned BART (Bidirectional and Auto-Regressive Transformers) model.
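The pipeline described above (divide the transcript into sections, summarize each section while extracting its action items, then combine the results) can be sketched in miniature. The cue phrases and the trivial "summarizer" below are stand-ins for the paper's extraction algorithm and LLM calls, chosen only to keep the example self-contained:

```python
ACTION_CUES = ("will ", "need to ", "should ", "let's ")  # toy cue phrases, an assumption

def extract_action_items(section):
    """Keep utterances containing an action cue (stand-in for the paper's extractor)."""
    return [u for u in section if any(cue in u.lower() for cue in ACTION_CUES)]

def summarize(section):
    """Toy 'summarizer': first utterance of the section (an LLM call in the real pipeline)."""
    return section[0]

def recap(transcript, section_size=2):
    """Summarize each section, then combine summaries and action items into one recap."""
    sections = [transcript[i:i + section_size]
                for i in range(0, len(transcript), section_size)]
    summaries = [summarize(s) for s in sections]
    actions = [item for s in sections for item in extract_action_items(s)]
    return {"summary": " ".join(summaries), "action_items": actions}

transcript = [
    "We reviewed the Q3 launch plan.",
    "Dana will draft the release notes by Friday.",
    "The beta feedback was mostly positive.",
    "We need to fix the login bug before launch.",
]
print(recap(transcript))
```

Because each section is processed independently, the per-section work can run in parallel, and no single call has to fit the whole transcript, which is how the approach sidesteps long-context limits.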


Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory

Mireshghallah, Niloofar, Kim, Hyunwoo, Zhou, Xuhui, Tsvetkov, Yulia, Sap, Maarten, Shokri, Reza, Choi, Yejin

arXiv.org Artificial Intelligence

The interactive use of large language models (LLMs) in AI assistants (at work, home, etc.) introduces a new set of inference-time privacy risks: LLMs are fed different types of information from multiple sources in their inputs and are expected to reason about what to share in their outputs, for what purpose and with whom, within a given context. In this work, we draw attention to the highly critical yet overlooked notion of contextual privacy by proposing ConfAIde, a benchmark designed to identify critical weaknesses in the privacy reasoning capabilities of instruction-tuned LLMs. Our experiments show that even the most capable models such as GPT-4 and ChatGPT reveal private information in contexts that humans would not, 39% and 57% of the time, respectively. This leakage persists even when we employ privacy-inducing prompts or chain-of-thought reasoning. Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.


Summaries, Highlights, and Action items: Design, implementation and evaluation of an LLM-powered meeting recap system

Asthana, Sumit, Hilleli, Sagih, He, Pengcheng, Halfaker, Aaron

arXiv.org Artificial Intelligence

Meetings play a critical infrastructural role in the coordination of work. In recent years, due to the shift to hybrid and remote work, more meetings have moved to online computer-mediated spaces. This has led to new problems (e.g., more time spent in less engaging meetings) and new opportunities (e.g., automated transcription/captioning and recap support). Recent advances in large language models (LLMs) for dialogue summarization have the potential to improve the experience of meetings by reducing individuals' meeting load and increasing the clarity and alignment of meeting outputs. Despite this potential, such models face technological limitations due to long transcripts and an inability to capture diverse recap needs based on a user's context. To address these gaps, we design, implement, and evaluate in context a meeting recap system. We first conceptualize two salient recap representations: important highlights, and a structured, hierarchical minutes view. We then develop a system that operationalizes these representations with dialogue summarization as its building block. Finally, we evaluate the effectiveness of the system with seven users in the context of their work meetings. Our findings show promise in using LLM-based dialogue summarization for meeting recap, and the need for both representations in different contexts. However, we find that LLM-based recap still lacks an understanding of what is personally relevant to participants, can miss important details, and can produce mis-attributions that are detrimental to group dynamics. We identify collaboration opportunities, such as a shared recap document, that a high-quality recap enables. We report on implications for designing AI systems that partner with users to learn and improve from natural interactions, overcoming the limitations related to personal relevance and summarization quality.


10 Best AI Tools for Meeting Notes (2023) - MarkTechPost

#artificialintelligence

Taking notes in a meeting can be a daunting task, especially if there is ample information being provided by the presenter. It is common to miss important points or misunderstand something that has been discussed. Fortunately, with technological advances, artificial intelligence tools can assist us in note-taking during meetings. These tools can help in identifying and summarizing key points, leaving us with a comprehensive record of the meeting without the need to frantically jot down every detail. Let's look at some of the most popular note-taking tools out there.


Ignite Friday Digital Marketing News (Updated Every Friday)

#artificialintelligence

This week: TikTok challenges Google and Microsoft with search ads, GPT-4 is on the way, and social media engagement rates are dropping. Here's what happened this week in digital marketing. OpenAI hasn't been in the news enough lately, so it's time for a fresh update. The next version of GPT, unimaginatively called GPT-4, will go live soon; in fact, it might already be live by the time you read this. Chief among the updates that make it more worthwhile than GPT-3 is multimodal functionality, meaning it supports text, speech, images, and even video. GPT-4 also works across multiple languages. If you've noticed that your social media engagement rates are on the decline, you're not alone.


Otter.ai overhauls its popular transcription platform

#artificialintelligence

AI-powered transcription service Otter.ai is today announcing a complete overhaul of its platform, offering workers new intelligently generated in-meeting action items, alongside a centralized environment for all meeting transcriptions and notes. Building on the launch of its Otter Assistant in August 2021, the new update aims to streamline communication further by continuing to use conversational AI to improve both the in-meeting and post-meeting experience. The update consists of a new-look home feed, which now acts as a centralized hub for all meeting and post-meeting actions. Users can connect their Google or Microsoft Outlook calendars to Otter to keep track of upcoming meetings, use it to directly join meetings or schedule their Otter Assistant to join. Shared conversations, highlights and comments, and tagged action items will also all be accessible from the new home feed.


Quantifying Availability and Discovery in Recommender Systems via Stochastic Reachability

Curmei, Mihaela, Dean, Sarah, Recht, Benjamin

arXiv.org Machine Learning

In this work, we consider how preference models in interactive recommendation systems determine the availability of content and users' opportunities for discovery. We propose an evaluation procedure based on stochastic reachability to quantify the maximum probability of recommending a target piece of content to a user for a set of allowable strategic modifications. This framework allows us to compute an upper bound on the likelihood of recommendation with minimal assumptions about user behavior. Stochastic reachability can be used to detect biases in the availability of content and diagnose limitations in the opportunities for discovery granted to users. We show that this metric can be computed efficiently as a convex program for a variety of practical settings, and further argue that reachability is not inherently at odds with accuracy. We demonstrate evaluations of recommendation algorithms trained on large datasets of explicit and implicit ratings. Our results illustrate how preference models, selection rules, and user interventions impact reachability and how these effects can be distributed unevenly.
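The paper formulates reachability as a convex program over allowable modifications; as a toy illustration of the quantity being maximized, the sketch below uses a softmax selection rule and a brute-force grid search over a single modifiable score. The scores, the grid, and the one-dimensional action space are made-up simplifications, not the authors' formulation:

```python
import math

def rec_prob(scores, target):
    """Softmax probability that the item at index `target` is recommended."""
    z = [math.exp(s) for s in scores]
    return z[target] / sum(z)

def max_reachability(base_scores, target, idx, deltas):
    """Maximum recommendation probability of `target` over allowed shifts
    `deltas` applied to the score at position `idx` (grid search stands in
    for the paper's convex program)."""
    best = 0.0
    for d in deltas:
        scores = list(base_scores)
        scores[idx] += d
        best = max(best, rec_prob(scores, target))
    return best

base = [1.0, 0.2, -0.5]                    # model scores for three items (made-up)
deltas = [x / 10 for x in range(-20, 21)]  # user may shift one score by up to +/-2
print(max_reachability(base, target=2, idx=2, deltas=deltas))
```

A low maximum even under the most favorable allowed modification signals that the item is effectively unreachable for that user, which is the kind of availability bias the metric is designed to surface.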